Global Versus Local Constructive Function Approximation for On-Line Reinforcement Learning
Authors
Abstract
In order to scale to problems with large or continuous state spaces, reinforcement learning algorithms need to be combined with function approximation techniques. The majority of work on function approximation for reinforcement learning has so far focused either on global function approximation with a static structure (such as multi-layer perceptrons), or on constructive architectures using locally responsive units. The former, whilst achieving some notable successes, has also been shown to fail on some relatively simple tasks. The locally constructive approach has been shown to be more stable, but may scale poorly to higher-dimensional inputs, as it requires a dramatic increase in resources. This paper explores the use of two constructive algorithms with non-locally responsive neurons, based on the popular Cascade-Correlation supervised-learning algorithm. The algorithms are applied within the Sarsa reinforcement learning algorithm, and their performance is compared against both a multi-layer perceptron and a locally constructive algorithm (the Resource Allocating Network) across three reinforcement learning tasks. It is shown that the globally constructive algorithms are less stable, but that on some tasks they can achieve performance similar to the locally constructive approach, whilst generating much more compact solutions.
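The abstract's core mechanism, value-function approximation inside Sarsa, can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a generic linear approximator Q(s, a) = w[a] · φ(s) (the paper's constructive networks would take the place of the fixed feature vector φ), and the function name and parameters are illustrative.

```python
import numpy as np

def sarsa_update(w, phi_s, a, r, phi_s2, a2, alpha=0.1, gamma=0.99):
    """One on-line Sarsa update for a linear value function Q(s, a) = w[a] @ phi(s).

    w       : (n_actions, n_features) weight matrix, updated in place
    phi_s   : feature vector of the current state
    a, a2   : current and next action indices (next action chosen by the policy)
    r       : reward received for taking a in s
    phi_s2  : feature vector of the next state
    """
    q_sa = w[a] @ phi_s
    q_next = w[a2] @ phi_s2
    td_error = r + gamma * q_next - q_sa      # Sarsa TD error
    w[a] += alpha * td_error * phi_s          # gradient step on the weights of action a
    return td_error

# Toy usage: two actions, three hand-chosen features.
w = np.zeros((2, 3))
phi = np.array([1.0, 0.5, 0.0])
delta = sarsa_update(w, phi, a=0, r=1.0, phi_s2=phi, a2=1)
```

A constructive architecture such as Cascade-Correlation changes what produces φ (hidden units are added during learning), but the temporal-difference update driving the weights takes this same form.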
Similar Papers
Multiagent Learning with a Noisy Global Reward Signal
Scaling multiagent reinforcement learning to domains with many agents is a complex problem. In particular, multiagent credit assignment becomes a key issue as the system size increases. Some multiagent systems suffer from a global reward signal that is very noisy or difficult to analyze. This makes deriving a learnable local reward signal very difficult. Difference rewards (a particular instanc...
A Constructive Spiking Neural Network for Reinforcement Learning in Autonomous Control
This paper presents a method that draws upon reinforcement learning to perform autonomous learning through the automatic construction of a spiking artificial neural network. Constructive neural networks have been applied previously to state and action-value function approximation but have encountered problems of excessive growth of the network, difficulty generalising across a range of problems...
Reinforcement Learning and Distributed Local Model Synthesis
Reinforcement learning is a general and powerful way to formulate complex learning problems and acquire good system behaviour. The goal of a reinforcement learning system is to maximize a long term sum of instantaneous rewards provided by a teacher. In its extremum form, reinforcement learning only requires that the teacher can provide a measure of success. This formulation does not require a t...
Sparse Distributed Memories for On-Line Value-Based Reinforcement Learning
In this paper, we advocate the use of Sparse Distributed Memories (SDMs) for on-line, value-based reinforcement learning (RL). SDMs provide a linear, local function approximation scheme, designed to work when a very large/ high-dimensional input (address) space has to be mapped into a much smaller physical memory. We present an implementation of the SDM architecture for on-line, value-based RL ...
Super- and Sub-Additive Envelopes of Aggregation Functions: Interplay Between Local and Global Properties, and Approximation
Super- and sub-additive transformations of aggregation functions have been recently introduced by Greco, Mesiar, Rindone and Šipeky [The superadditive and the subadditive transformations of integrals and aggregation functions, Fuzzy Sets and Systems 291 (2016), 40–53]. In this article we give a survey of the recent development regarding the existence of aggregation functions with ...